617 research outputs found

    Interface refactoring in performance-constrained web services

    This paper presents the development of REF-WS, an approach that enables a Web Service provider to reliably evolve their service through the application of refactoring transformations. REF-WS is intended to aid service providers, particularly in reliability- and performance-constrained domains, as it permits upgraded 'non-backwards compatible' services to be deployed into a performance-constrained network where existing consumers depend on an older version of the service interface. For this to succeed, the refactoring and message mediation must occur without affecting functional compatibility with the service's consumers, and must operate within the performance overhead expected of the original service, introducing as little latency as possible. Furthermore, compared to a manually programmed solution, the presented approach enables the service developer to apply and parameterize refactorings with confidence that they will not produce an invalid or 'corrupt' transformation of messages. This is achieved through the use of preconditions for the defined refactorings.
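
    The abstract does not include REF-WS source or its API; the following is a minimal sketch of the precondition idea it describes, where a refactoring is applied to a message only when its precondition holds. The `Refactoring` class, field names, and the rename example are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Refactoring:
    """Hypothetical pairing of a message transformation with its precondition."""
    name: str
    precondition: Callable[[dict], bool]
    transform: Callable[[dict], dict]

    def apply(self, message: dict) -> dict:
        # Refuse to transform rather than risk producing a 'corrupt' message.
        if not self.precondition(message):
            raise ValueError(f"precondition of '{self.name}' not satisfied")
        return self.transform(message)


# Illustrative refactoring: rename a field expected by the old service interface.
rename_customer_id = Refactoring(
    name="rename customerId -> clientId",
    precondition=lambda msg: "customerId" in msg and "clientId" not in msg,
    transform=lambda msg: {("clientId" if k == "customerId" else k): v
                           for k, v in msg.items()},
)

old_message = {"customerId": 42, "amount": 9.99}
print(rename_customer_id.apply(old_message))   # {'clientId': 42, 'amount': 9.99}
```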

    A Graph-Based Approach to Address Trust and Reputation in Ubiquitous Networks

    The increasing popularity of virtual computing environments such as Cloud and Grid computing is helping to drive the realization of ubiquitous and pervasive computing. However, as computing becomes more entrenched in everyday life, the concepts of trust and risk become increasingly important. In this paper, we propose a new graph-based theoretical approach to address trust and reputation in complex ubiquitous networks. We formulate trust as a function of the quality of a task and the time required to authenticate an agent-to-agent relationship, based on the Zero-Common Knowledge (ZCK) authentication scheme. This initial representation applies graph-theoretic concepts, accompanied by a mathematical formulation of trust metrics. The proposed approach increases the awareness and trustworthiness of agents based on the values estimated for each requested task. We conclude by stating our plans for future work in this area.
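
    The abstract states that trust is a function of task quality and authentication time over an agent graph, but gives no formula; the sketch below assumes a simple hypothetical combination (quality discounted by normalized authentication delay) stored on graph edges, purely for illustration.

```python
# Hypothetical sketch: agents form a graph whose edge weights hold a trust value
# derived from task quality and ZCK authentication time. The combining function
# below is an assumption; the paper's actual metric may differ.

def trust(quality: float, auth_time: float, max_time: float = 10.0) -> float:
    """Trust in [0, 1]: high task quality, penalized by slow authentication."""
    delay_penalty = min(auth_time / max_time, 1.0)
    return max(0.0, quality * (1.0 - delay_penalty))


# Adjacency map: graph[a][b] = trust that agent a places in agent b.
graph: dict[str, dict[str, float]] = {"A": {}, "B": {}, "C": {}}


def record_interaction(src: str, dst: str, quality: float, auth_time: float) -> None:
    graph[src][dst] = trust(quality, auth_time)


def reputation(agent: str) -> float:
    """Simple reputation: mean of the trust values other agents assign to this agent."""
    incoming = [edges[agent] for edges in graph.values() if agent in edges]
    return sum(incoming) / len(incoming) if incoming else 0.0


record_interaction("A", "B", quality=0.9, auth_time=2.0)   # good task, fast authentication
record_interaction("A", "C", quality=0.9, auth_time=9.0)   # good task, slow authentication
print(graph["A"])          # B ends up more trusted than C
print(reputation("B"))
```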

    Imaging compaction band propagation in Diemelstadt sandstone using acoustic emission locations

    We report results from a conventional triaxial test performed on a specimen of Diemelstadt sandstone under an effective confining pressure of 110 MPa, a value sufficient to induce compaction bands. The maximum principal stress was applied normal to the visible bedding so that compaction bands propagated parallel to bedding. Acoustic emission events greater than 40 dB in amplitude, and associated with the propagation of the first compaction band, were located in 3D, to within ±2 mm, using a Hyperion Giga-RAM recorder. Event magnitudes were used to calculate the seismic b-value at intervals during band growth. Results show that compaction bands nucleate at the specimen edge and propagate across the sample at approximately 0.08 mm/s. The seismic b-value does not vary significantly during deformation, suggesting that compaction band growth is characterized by small-scale cracking that does not change significantly in scale.
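
    As a concrete illustration of the b-value computation mentioned above, here is a short sketch using the standard Aki maximum-likelihood estimator, b = log10(e) / (mean(M) - Mc), over windows of acoustic-emission magnitudes. The synthetic magnitudes and completeness threshold are made up, and the paper may use a different estimator.

```python
import numpy as np


def b_value(magnitudes: np.ndarray, completeness_mag: float) -> float:
    """Aki (1965) maximum-likelihood b-value for events above the completeness magnitude."""
    m = magnitudes[magnitudes >= completeness_mag]
    return np.log10(np.e) / (m.mean() - completeness_mag)


# Illustrative only: sliding-window b-values over a synthetic AE catalogue.
rng = np.random.default_rng(0)
catalogue = rng.exponential(scale=0.4, size=2000) + 1.0   # synthetic event magnitudes
window = 200
b_series = [b_value(catalogue[i:i + window], completeness_mag=1.0)
            for i in range(0, len(catalogue) - window, window)]
print(np.round(b_series, 2))   # roughly constant, as reported for small-scale cracking
```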

    Timely Long Tail Identification through Agent Based Monitoring and Analytics

    The increasing complexity and scale of distributed systems has resulted in emergent behavior that substantially affects overall system performance. A significant emergent property is the "Long Tail", whereby a small proportion of task stragglers significantly impact job completion times. To mitigate such behavior, straggling tasks occurring within the system need to be accurately identified in a timely manner. However, current approaches focus on mitigation rather than identification, and typically identify stragglers too late in the execution lifecycle. This paper presents a method and tool to identify Long Tail behavior within distributed systems in a timely manner, through a combination of online and offline analytics. This is achieved through historical analysis to profile and model task execution patterns, which then informs online analytic agents that monitor task execution at runtime. Furthermore, we provide an empirical analysis of two large-scale production Cloud datacenters that demonstrates the challenge of data skew within modern distributed systems; this analysis shows that approximately 5% of task stragglers caused by data skew impact 50% of the total jobs for batch processes. Our results demonstrate that our approach is capable of identifying task stragglers less than 11% into their execution lifecycle with 98% accuracy, a significant improvement over current state-of-the-art practice that enables far more effective mitigation strategies in large-scale distributed systems.
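
    The paper's agents and models are not included with the abstract; the following is a minimal sketch, under assumed names and thresholds, of the general pattern it describes: an offline profile of historical task durations informs an online check that flags a task as a potential straggler early in its lifecycle.

```python
import statistics

# --- offline analytics (hypothetical): profile historical task durations per job type ---
history = {"batch-sort": [102.0, 98.5, 110.2, 95.7, 101.3, 99.9]}   # seconds, made up

profiles = {job: {"mean": statistics.mean(d), "stdev": statistics.stdev(d)}
            for job, d in history.items()}


# --- online analytics (hypothetical): monitor progress and flag stragglers early ---
def is_potential_straggler(job: str, progress: float, elapsed: float, k: float = 2.0) -> bool:
    """Flag a task whose projected duration exceeds mean + k*stdev of the historical profile."""
    if progress <= 0.0:
        return False
    projected = elapsed / progress            # naive linear projection of total duration
    p = profiles[job]
    return projected > p["mean"] + k * p["stdev"]


# A task only 10% into its lifecycle can already be flagged,
# mirroring the early identification the paper targets.
print(is_potential_straggler("batch-sort", progress=0.10, elapsed=14.0))   # True
```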

    An Approach for Modeling and Ranking Node-level Stragglers in Cloud Datacenters

    The ability of servers to effectively execute tasks within Cloud datacenters varies due to heterogeneous CPU and memory capacities, resource contention, network configurations, and operational age. Unexpectedly slow server nodes (node-level stragglers) result in assigned tasks becoming task-level stragglers, which dramatically impede parallel job execution. However, it is currently unknown how slow nodes directly correlate to task straggler manifestation. To address this knowledge gap, we propose a method for node performance modeling and ranking in Cloud datacenters based on analyzing parallel job execution tracelog data. Using a production Cloud system as a case study, we demonstrate how node execution performance is driven by temporal changes in node operation as opposed to node hardware capacity. Different sample sets have been filtered in order to evaluate the generality of our framework, and the analytic results demonstrate that the ability of nodes to execute parallel tasks tends to follow a 3-parameter log-logistic distribution. Further statistical attributes such as confidence intervals, quantile values, and extreme-case probabilities can also be used for ranking and identifying potential straggler nodes within the cluster. We exploit a graph-based algorithm for partitioning server nodes into five levels, with 0.83% of nodes identified as node-level stragglers. Our work lays the foundation towards enhancing scheduling algorithms by avoiding slow nodes, reducing task straggler occurrence, and improving parallel job performance.
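
    To make the distributional claim concrete, here is a sketch of fitting a 3-parameter log-logistic distribution to per-node performance scores with SciPy (`fisk` is SciPy's name for the log-logistic family; with location and scale left free it is the 3-parameter form). The synthetic data and the quantile cutoff for flagging straggler nodes are assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy import stats

# Synthetic per-node performance scores (e.g. mean task duration per node); made up.
node_scores = stats.fisk.rvs(c=3.5, loc=50.0, scale=20.0, size=500, random_state=1)

# Fit the 3-parameter log-logistic (shape c, location, scale).
c, loc, scale = stats.fisk.fit(node_scores)
print(f"shape={c:.2f}, loc={loc:.2f}, scale={scale:.2f}")

# Hypothetical ranking rule: nodes beyond the 99th percentile of the fitted
# distribution are treated as potential node-level stragglers.
cutoff = stats.fisk.ppf(0.99, c, loc=loc, scale=scale)
stragglers = np.flatnonzero(node_scores > cutoff)
print(f"{len(stragglers)} of {len(node_scores)} nodes flagged "
      f"({100 * len(stragglers) / len(node_scores):.2f}%)")
```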

    Reducing Late-Timing Failure at Scale: Straggler Root-Cause Analysis in Cloud Datacenters

    Task stragglers hinder effective parallel job execution in Cloud datacenters, resulting in late-timing failures due to the violation of specified timing constraints. Straggler-tolerant methods such as speculative execution provide limited effectiveness due to (i) a lack of precise knowledge of straggler root-causes and (ii) straggler identification occurring too late within a job lifecycle. This paper proposes a method to ascertain underlying straggler root-causes by analyzing key parameters within large-scale distributed systems, and to determine the correlation between straggler occurrence and factors including resource contention, task concurrency, and server failures. Our preliminary study of a production Cloud datacenter indicates that the dominant straggler root-cause is high temporal resource contention. The results can assist in enhancing straggler prediction and mitigation for tolerating late-timing failures within large-scale distributed systems.
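
    A minimal sketch of the kind of correlation analysis described: given a per-task table with a straggler flag and candidate factors, compute the correlation of each factor with straggler occurrence. The column names and synthetic values are assumptions, not the paper's tracelog schema.

```python
import numpy as np
import pandas as pd

# Hypothetical per-task records; in the paper these would come from datacenter tracelogs.
rng = np.random.default_rng(7)
n = 5000
contention = rng.uniform(0.0, 1.0, n)          # temporal resource contention on the host
concurrency = rng.integers(1, 64, n)           # co-running tasks on the server
server_failed = rng.random(n) < 0.02           # server failure during execution

# Make stragglers mostly contention-driven so the analysis has something to find.
straggler = (rng.random(n) < 0.05 + 0.4 * contention) | server_failed

df = pd.DataFrame({
    "straggler": straggler.astype(int),
    "resource_contention": contention,
    "task_concurrency": concurrency,
    "server_failure": server_failed.astype(int),
})

# Correlation of each factor with straggler occurrence.
print(df.corr(numeric_only=True)["straggler"].drop("straggler").sort_values(ascending=False))
```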

    Mitigate data skew caused stragglers through ImKP partition in MapReduce

    Speculative execution is the mechanism adopted by the current MapReduce framework when dealing with the straggler problem; it functions by creating redundant copies of identified stragglers, and the result of the quicker task is adopted to improve overall job execution performance. Although proven effective for contention-caused stragglers, speculative execution quickly meets its bottleneck when mitigating data-skew-caused stragglers due to its replication nature: the identical unbalanced input data will lead to an equally slow speculative task. Map inputs are typically even in size according to the HDFS block configuration, so skew-caused stragglers occur mainly in the Reduce phase because of the unknown intermediate key distribution. In this paper, we focus on mitigating data-skew-caused Reduce stragglers and propose ImKP, an Intermediate Key Pre-processing framework that enables evenly distributed partitions for Reduce inputs. A group-based ranking technique has been developed that dramatically decreases the pre-processing time, and ImKP manages to eliminate this timing overhead by parallelizing the pre-processing with the file uploading procedure (from the local file system to HDFS). For jobs that take input directly from HDFS, ImKP minimizes the overhead by storing the mapping result on every node within the cluster for reuse. Experiments are conducted on different datasets with various workloads. Results show that, compared to the popular hash partition, ImKP can dramatically decrease Reduce skew, achieving a 99.8% reduction in the coefficient of variation of input sizes on average, and improve job response performance by up to 29.37%.
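
    ImKP's implementation is not given in the abstract; the sketch below illustrates the underlying idea under simplified assumptions: pre-compute intermediate-key frequencies, assign keys to reducers so that partition sizes are balanced, and compare the coefficient of variation of reducer input sizes against a plain hash partition. The greedy assignment and synthetic key distribution are illustrative stand-ins for ImKP's group-based ranking.

```python
import statistics
import zlib
from collections import Counter


def hash_partition(key_counts: Counter, reducers: int) -> list[int]:
    """Baseline: assign each intermediate key to a reducer by hashing it."""
    sizes = [0] * reducers
    for key, count in key_counts.items():
        sizes[zlib.crc32(key.encode()) % reducers] += count
    return sizes


def preprocessed_partition(key_counts: Counter, reducers: int) -> list[int]:
    """Sketch of a pre-processed partition: heaviest keys first, each to the lightest reducer."""
    sizes = [0] * reducers
    for key, count in key_counts.most_common():
        sizes[sizes.index(min(sizes))] += count
    return sizes


def coefficient_of_variation(sizes: list[int]) -> float:
    return statistics.pstdev(sizes) / statistics.mean(sizes)


# Synthetic skewed intermediate-key frequencies (Zipf-like); purely illustrative.
key_counts = Counter({f"key{i}": 50_000 // (i + 10) for i in range(200)})

for name, sizes in [("hash", hash_partition(key_counts, 8)),
                    ("pre-processed", preprocessed_partition(key_counts, 8))]:
    print(f"{name:13s} CV of reducer input sizes = {coefficient_of_variation(sizes):.3f}")
```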

    Adaptive Speculation for Efficient Internetware Application Execution in Clouds

    Modern Cloud computing systems are massive in scale, featuring environments that can execute highly dynamic Internetware applications with huge numbers of interacting tasks. This has led to a substantial challenge, the straggler problem, whereby a small subset of slow tasks significantly impedes parallel job completion. This problem results in longer service responses, degraded system performance, and late-timing failures that can easily threaten Quality of Service (QoS) compliance. Speculative execution (or speculation) is the prominent method deployed in Clouds to tolerate stragglers by creating task replicas at runtime. The method detects stragglers by specifying a predefined threshold to calculate the difference between individual tasks and the average task progression within a job. However, such a static threshold debilitates speculation effectiveness as it fails to capture the intrinsic diversity of timing constraints in Internetware applications, as well as dynamic environmental factors such as resource utilization. By considering such characteristics, different levels of strictness for replica creation can be imposed to adaptively achieve specified levels of QoS for different applications. In this paper we present an algorithm to improve the execution efficiency of Internetware applications by dynamically calculating the straggler threshold, considering key parameters including job QoS timing constraints, task execution progress, and optimal system resource utilization. We implement this dynamic straggler threshold within the YARN architecture to evaluate its effectiveness against existing state-of-the-art solutions. Results demonstrate that the proposed approach can reduce parallel job response times by up to 20% compared to the static threshold, and achieves a higher speculation success rate of up to 66.67% compared to 16.67% for the static method.
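
    The exact threshold formula is not given in the abstract; the sketch below is a hypothetical rendering of the idea: instead of a fixed progress gap, the threshold tightens as the job's QoS deadline approaches and relaxes when cluster utilization is already high, so fewer replicas are spawned onto scarce resources. The parameter names and weights are assumptions, not the paper's algorithm.

```python
from statistics import mean


def dynamic_threshold(base: float, time_left_ratio: float, utilization: float) -> float:
    """Hypothetical dynamic straggler threshold (illustrative weights, not the paper's formula).

    base            -- static progress-gap threshold (e.g. 0.2, as in default speculation)
    time_left_ratio -- share of the job's QoS time allowance still remaining, in [0, 1]
    utilization     -- current cluster resource utilization, in [0, 1]
    """
    urgency = 1.0 - time_left_ratio
    # Tighten as the deadline approaches; relax when the cluster is already busy.
    return base * (1.0 - 0.5 * urgency + 0.5 * utilization)


def should_speculate(job_progress: list[float], task_progress: float, threshold: float) -> bool:
    """Create a replica when a task lags mean job progress by more than the threshold."""
    return (mean(job_progress) - task_progress) > threshold


progress = [0.62, 0.58, 0.65, 0.30, 0.60]    # one lagging task at 30%
relaxed = dynamic_threshold(base=0.2, time_left_ratio=0.8, utilization=0.9)
strict = dynamic_threshold(base=0.2, time_left_ratio=0.1, utilization=0.3)
print(f"relaxed threshold {relaxed:.2f} -> speculate: {should_speculate(progress, 0.30, relaxed)}")
print(f"strict  threshold {strict:.2f} -> speculate: {should_speculate(progress, 0.30, strict)}")
```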

    Multi-tenancy in cloud computing

    As Cloud Computing becomes the prevailing computational model in information technology, Cloud security is becoming a major issue in Cloud adoption, and is considered one of the most critical concerns for large Cloud customers (i.e. governments and enterprises). This valid concern is mainly driven by the Multi-Tenancy situation, which refers to resource sharing in Cloud Computing and its associated risks, where confidentiality and/or integrity could be violated. As a result, security concerns may hinder the advancement of Cloud Computing in the market. In order to propose effective security solutions and strategies, professionals require a good understanding of current Cloud implementations and practices, especially public Clouds. Such understanding is needed in order to recognize attack vectors and attack surfaces. In this paper we propose an attack model based on a threat model designed to take advantage of the Multi-Tenancy situation only. Before that, a clear understanding of Multi-Tenancy, its origin, and its benefits is demonstrated, and a novel way of approaching Multi-Tenancy is illustrated. Finally, we try to sense any suspicious behavior that may indicate a possible attack, attempting to recognize the proposed attack model empirically from Google trace logs. The Google trace logs comprise 29 days' worth of data released by Google; the dataset has been utilized in reliability and power consumption studies, but not, to the best of our knowledge, in any security study.

    Intelligent Resource Scheduling at Scale: a Machine Learning Perspective

    Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The heterogeneity of workload and server characteristics exhibited in Cloud-scale or Internet-scale systems adds further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper we describe and discuss how ML could be used to understand both workloads and environments automatically, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating tailed stragglers. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study that deals with performance-centric node classification and straggler mitigation. We believe that an ML-based method will help to achieve architectural optimization and efficiency improvement.
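
    As a concrete illustration of the node-classification step in such a case study, here is a minimal sketch that trains a classifier to label nodes as normal or straggler-prone from a few utilization features. The features, labels, and model choice (a scikit-learn random forest) are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic per-node features: CPU utilization, memory utilization, mean task slowdown; made up.
rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([
    rng.uniform(0.1, 1.0, n),      # CPU utilization
    rng.uniform(0.1, 1.0, n),      # memory utilization
    rng.lognormal(0.0, 0.3, n),    # mean task slowdown on the node
])
# Label a node straggler-prone when slowdown is high and it is heavily utilized.
y = ((X[:, 2] > 1.3) & (X[:, 0] > 0.6)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# A scheduler could down-rank or avoid nodes predicted as straggler-prone.
print(classification_report(y_test, model.predict(X_test), digits=2))
```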